157 research outputs found

    Mining Firm-level Uncertainty in Stock Market: A Text Mining Approach

    The traditional finance paradigm seeks to understand uncertainty and its impact on the stock market. However, most previous studies quantify uncertainty at the macro level, for example with the economic policy uncertainty (EPU) index, and few tap into firm-level uncertainty. In this paper, we address this empirical gap by applying text mining tools to measure a firm-level uncertainty score from news content. We focus on companies listed in the S&P 1500 and crawled a total of 2,196,975 news articles from the LexisNexis database covering April 2007 to July 2017. We extracted uncertainty-related information as features using named entity extraction, the Loughran-McDonald (LM) dictionary, and other linguistic features, and employed nonlinear machine learning models to investigate the impact of these uncertainty-related features on stocks' future returns. To address the theoretical question, we use traditional asset pricing techniques to test the relationship between information-derived uncertainty and financial market performance.
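    As a rough illustration of dictionary-based scoring in the spirit of the pipeline above, the sketch below counts uncertainty-related terms in an article; the term list, tokenization, and normalization are illustrative placeholders, not the paper's actual feature set.

    import re
    from collections import Counter

    # Illustrative subset of uncertainty terms in the style of the
    # Loughran-McDonald word list; the real dictionary is far larger.
    UNCERTAINTY_TERMS = {
        "uncertain", "uncertainty", "unpredictable", "volatile",
        "risk", "may", "might", "possibly", "approximately", "unknown",
    }

    def uncertainty_score(text):
        """Fraction of an article's tokens that are uncertainty terms."""
        tokens = re.findall(r"[a-z']+", text.lower())
        if not tokens:
            return 0.0
        counts = Counter(tokens)
        hits = sum(counts[term] for term in UNCERTAINTY_TERMS)
        return hits / len(tokens)

    article = "Analysts say the outlook is uncertain and earnings may possibly fall."
    print(f"uncertainty score: {uncertainty_score(article):.3f}")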

    Sample-Efficient Multi-Agent RL: An Optimization Perspective

    We study multi-agent reinforcement learning (MARL) for general-sum Markov games (MGs) under general function approximation. To identify minimal assumptions for sample-efficient learning, we introduce a novel complexity measure called the Multi-Agent Decoupling Coefficient (MADC) for general-sum MGs. Using this measure, we propose the first unified algorithmic framework that ensures sample efficiency in learning Nash equilibria, coarse correlated equilibria, and correlated equilibria for both model-based and model-free MARL problems with low MADC. We also show that our algorithm achieves sublinear regret comparable to existing works. Moreover, our algorithm combines an equilibrium-solving oracle with a single-objective optimization subprocedure that solves for the regularized payoff of each deterministic joint policy, which avoids solving constrained optimization problems within data-dependent constraints (Jin et al. 2020; Wang et al. 2023) or executing sampling procedures with complex multi-objective optimization problems (Foster et al. 2023), and is thus more amenable to empirical implementation.
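    For reference, the sublinear regret mentioned above can be read in the standard sense sketched below; this is the generic single-comparator form stated only for orientation, not the paper's exact multi-agent definition, with V^* the comparator value, \pi^k the policy produced in episode k, and K the number of episodes.

    % Cumulative regret over K episodes and the sublinear-regret condition.
    \mathrm{Reg}(K) \;=\; \sum_{k=1}^{K} \bigl( V^{*} - V^{\pi^{k}} \bigr),
    \qquad
    \lim_{K \to \infty} \frac{\mathrm{Reg}(K)}{K} \;=\; 0 .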

    A Real-Time Monitoring System of Industry Carbon Monoxide Based on Wireless Sensor Networks

    Carbon monoxide (CO) can burn or explode when its concentration exceeds safety standards. Hence, in this paper, a WiFi-based real-time CO monitoring system is proposed for the construction industry, in which a sensor measurement node is designed with a low-frequency modulation method to acquire CO concentration reliably, and a digital filtering method is adopted to suppress noise. A WiFi network is constructed to transmit the measurements, and node positions are determined by triangulation. The measured data are displayed on a computer or smartphone through a graphical interface. Experiments show that the monitoring system achieves excellent accuracy and stability in long-term continuous monitoring.
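    As a rough illustration of triangulation-based node positioning of the kind mentioned above (the paper's exact method is not specified here), a node's 2D position can be estimated from its distances to anchor nodes with known coordinates; the anchor layout and distances below are made-up values.

    import numpy as np

    def trilaterate(anchors, dists):
        """Estimate a 2D position from distances to >= 3 anchor nodes.

        Subtracting the first anchor's range equation from the others
        linearizes the problem, which is then solved by least squares.
        """
        x0, y0 = anchors[0]
        d0 = dists[0]
        A, b = [], []
        for (xi, yi), di in zip(anchors[1:], dists[1:]):
            A.append([2 * (xi - x0), 2 * (yi - y0)])
            b.append(d0**2 - di**2 + xi**2 - x0**2 + yi**2 - y0**2)
        pos, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
        return pos

    # Hypothetical anchor coordinates (metres) and measured distances.
    anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
    dists = np.array([7.07, 7.07, 7.07])   # roughly consistent with (5, 5)
    print("estimated position:", trilaterate(anchors, dists))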

    DASA: Difficulty-Aware Semantic Augmentation for Speaker Verification

    Data augmentation is vital to the generalization ability and robustness of deep neural network (DNN) models. Existing augmentation methods for speaker verification manipulate the raw signal, which is time-consuming, and the augmented samples lack diversity. In this paper, we present a novel difficulty-aware semantic augmentation (DASA) approach for speaker verification, which can generate diversified training samples in the speaker embedding space with negligible extra computing cost. Firstly, we augment training samples by perturbing speaker embeddings along semantic directions, which are obtained from speaker-wise covariance matrices. Secondly, because accurate covariance matrices must be estimated from robust speaker embeddings during training, we introduce difficulty-aware additive margin softmax (DAAM-Softmax) to obtain optimal speaker embeddings. Finally, we assume the number of augmented samples goes to infinity and derive a closed-form upper bound of the expected loss with DASA, which achieves compatibility and efficiency. Extensive experiments demonstrate that the proposed approach achieves a remarkable performance improvement. The best result is a 14.6% relative reduction in the EER metric on the CN-Celeb evaluation set. Comment: Accepted by ICASSP 202
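    A minimal sketch of covariance-based embedding perturbation in the spirit of the first step described above; the embedding dimension, perturbation strength, and data are placeholders, and the DAAM-Softmax loss and closed-form bound are not reproduced here.

    import numpy as np

    rng = np.random.default_rng(0)

    def semantic_augment(embedding, speaker_cov, strength=0.5, n_aug=4):
        """Perturb a speaker embedding along directions drawn from the
        speaker-wise covariance, yielding semantically varied samples."""
        noise = rng.multivariate_normal(
            mean=np.zeros(embedding.shape[0]),
            cov=strength * speaker_cov,
            size=n_aug,
        )
        return embedding[None, :] + noise

    # Hypothetical 8-dimensional embeddings for one speaker.
    spk_embs = rng.normal(size=(20, 8))
    cov = np.cov(spk_embs, rowvar=False)      # speaker-wise covariance
    augmented = semantic_augment(spk_embs[0], cov)
    print(augmented.shape)                    # (4, 8)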

    Hierarchical Reinforcement Learning under Mixed Observability

    The framework of mixed observable Markov decision processes (MOMDPs) models many robotic domains in which some state variables are fully observable while others are not. In this work, we identify a significant subclass of MOMDPs defined by how actions influence the fully observable components of the state and how those, in turn, influence the partially observable components and the rewards. This unique property allows for a two-level hierarchical approach we call HIerarchical Reinforcement Learning under Mixed Observability (HILMO), which restricts partial observability to the top level while the bottom level remains fully observable, enabling higher learning efficiency. The top level produces desired goals to be reached by the bottom level until the task is solved. We further develop theoretical guarantees to show that our approach can achieve optimal and quasi-optimal behavior under mild assumptions. Empirical results on long-horizon continuous control tasks demonstrate the efficacy and efficiency of our approach in terms of improved success rate, sample efficiency, and wall-clock training time. We also deploy policies learned in simulation on a real robot. Comment: Accepted at the 15th International Workshop on the Algorithmic Foundations of Robotics (WAFR) 2022, University of Maryland, College Park. The first two authors contributed equally.
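    A schematic sketch of the two-level control loop described above, with the top level proposing goals and the fully observable bottom level pursuing them; the dynamics, policies, and success test are stand-in stubs rather than the paper's learned components.

    import random

    def top_level_policy(observation):
        """Top level (partially observable in the real setting): pick a goal."""
        return random.uniform(-1.0, 1.0)               # hypothetical 1-D goal

    def low_level_policy(state, goal, max_step=0.1):
        """Bottom level (fully observable): move the state toward the goal."""
        return max(-max_step, min(max_step, goal - state))

    def reached(state, goal, tol=1e-3):
        return abs(state - goal) < tol

    state, observation, solved = 0.0, 0.0, False
    for goals_issued in range(1, 201):                 # outer, top-level loop
        goal = top_level_policy(observation)
        while not reached(state, goal):                # inner, bottom-level loop
            state += low_level_policy(state, goal)
        observation = state                            # would be noisy/partial in a MOMDP
        if abs(state) > 0.9:                           # hypothetical task-success test
            solved = True
            break
    print("solved:", solved, "after", goals_issued, "goals")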

    Quality Index for Stereoscopic Images by Separately Evaluating Adding and Subtracting

    The human visual system (HVS) plays an important role in stereo image quality perception, and there is therefore considerable interest in how knowledge of visual perception can be exploited in image quality assessment models. This paper proposes a full-reference metric for quality assessment of stereoscopic images based on the binocular difference channel and the binocular summation channel. For a stereo pair, the binocular summation map and binocular difference map are first computed by adding and subtracting the left and right images. The binocular summation is then decoupled into two parts, namely additive impairments and detail losses, and its quality is obtained as an adaptive combination of the quality of the detail losses and that of the additive impairments. The quality of the binocular difference is computed using the contrast sensitivity function (CSF) and weighted multi-scale SSIM (MS-SSIM). Finally, the quality of the binocular summation and the binocular difference is integrated into an overall quality index. The experimental results indicate that, compared with existing metrics, the proposed metric is highly consistent with subjective quality assessment and is a robust measure. The results also indirectly support the hypothesis of the existence of binocular summation and binocular difference channels.
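    A minimal sketch of the summation/difference decomposition step, assuming grayscale left and right views; the per-channel quality function and combination weight below are simple placeholders standing in for the paper's CSF- and MS-SSIM-based measures.

    import numpy as np

    def binocular_maps(left, right):
        """Binocular summation and difference maps of a stereo pair."""
        summation = left.astype(np.float64) + right.astype(np.float64)
        difference = left.astype(np.float64) - right.astype(np.float64)
        return summation, difference

    def simple_channel_quality(reference, distorted):
        """Placeholder per-channel quality (inverse RMSE), standing in for the
        CSF-weighted / MS-SSIM measures used in the paper."""
        rmse = np.sqrt(np.mean((reference - distorted) ** 2))
        return 1.0 / (1.0 + rmse)

    rng = np.random.default_rng(1)
    ref_l, ref_r = rng.random((64, 64)), rng.random((64, 64))
    dis_l = ref_l + 0.05 * rng.standard_normal((64, 64))   # hypothetical distortion
    dis_r = ref_r + 0.05 * rng.standard_normal((64, 64))

    ref_sum, ref_diff = binocular_maps(ref_l, ref_r)
    dis_sum, dis_diff = binocular_maps(dis_l, dis_r)

    w = 0.7   # illustrative weight between the two channels
    q = (w * simple_channel_quality(ref_sum, dis_sum)
         + (1 - w) * simple_channel_quality(ref_diff, dis_diff))
    print(f"overall quality index: {q:.3f}")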